Biological credit assignment through dynamic inversion of feedforward networks
Learning depends on changes in synaptic connections deep inside the brain. In multilayer networks, these changes are triggered by error signals fed back from the output, generally through a stepwise inversion of the feedforward processing steps. The gold standard for this process --- backpropagation --- works well in artificial neural networks, but is biologically implausible. Several recent proposals have emerged to address this problem, but many of these biologically-plausible schemes are based on learning an independent set of feedback connections. This complicates the assignment of errors to each synapse by making it dependent upon a second learning problem, and by fitting inversions rather than guaranteeing them.
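The abstract's notion of a "stepwise inversion of the feedforward processing steps" can be made concrete with a toy sketch. Below, instead of propagating the error through learned feedback weights, a simple dynamical system settles to the pseudoinverse of a layer's forward weights applied to the error. This is an illustration of inversion-by-dynamics only, not the authors' scheme: the flow uses the transpose W^T purely to guarantee convergence, and all sizes and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 20))  # forward weights of one layer (hypothetical sizes)
e = rng.standard_normal(10)        # error signal at the layer's output

# Inversion by dynamics: run the gradient flow
#   d(delta)/dt = W^T (e - W delta),
# whose fixed point, when started from delta = 0, is the minimum-norm
# least-squares solution delta* = W^+ e, i.e. the pseudoinverse of the
# forward step applied to the error.
delta = np.zeros(20)
dt = 0.01
for _ in range(20000):
    delta += dt * (W.T @ (e - W @ delta))

assert np.allclose(delta, np.linalg.pinv(W) @ e, atol=1e-6)
```

The point of the sketch is that no second learning problem is involved: the inversion is produced by relaxation of a fixed dynamical system, so it holds by construction rather than being fitted.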
Supplementary Materials: Biological Credit Assignment through Dynamic Inversion of Feedforward Networks
Accurate convergence of the backward pass dynamics is crucial for the success of dynamic inversion. As noted in Section 3.2, the eigenvalues of the matrix …

Here we provide more details for single-loop dynamic inversion (SLDI). …

As mentioned in the main text, we suspect that dynamic inversion may relate to second-order methods. Following the derivations in Botev et al. (2017), we can write the block-diagonal sample …

A more thorough analysis is merited on the relationship between Eqs. … This was not necessary for MNIST classification or MNIST autoencoding.
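One concrete way to see a link to second-order methods (a sketch under our own reading, not the paper's derivation) is the following identity: for a single linear layer with squared error, the undamped Gauss-Newton step (W^T W)^+ W^T e coincides with the pseudoinverse solution W^+ e. Propagating errors through a pseudoinverse of the forward weights therefore acts like a layer-local, block-diagonal Gauss-Newton preconditioner. All names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 12))  # forward weights of one linear layer
e = rng.standard_normal(8)        # output error

# Layer-local Gauss-Newton step for the loss 0.5 * ||e - W d||^2:
# the (undamped) curvature matrix is W^T W, so the step is (W^T W)^+ W^T e.
gn_step = np.linalg.pinv(W.T @ W) @ W.T @ e

# The same vector obtained by directly pseudoinverting the forward map.
pinv_step = np.linalg.pinv(W) @ e

assert np.allclose(gn_step, pinv_step)
```

The identity W^+ = (W^T W)^+ W^T holds for any matrix W, which is why the two steps agree exactly here; a damped or block-structured curvature estimate, as in Botev et al. (2017), would only approximate it.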
Review for NeurIPS paper: Biological credit assignment through dynamic inversion of feedforward networks
As the authors note, the stability of the feedback dynamics depends on a condition on the eigenvalues of WB - alpha*I. Without it, the feedback dynamics will yield unpredictable results and presumably not perform effective credit assignment. This condition is extremely unlikely to be satisfied generically, and is essentially the analog of sign-symmetry in forward and backward weights when one considers pseudoinverses rather than transposes. The authors manually enforce that it be satisfied at initialization, and manually adjust the backward weights if the condition is violated during training. These manual initialization choices and adjustments are doing much of the work of credit assignment in the authors' algorithm -- I can't tell from the results as presented how helpful the dynamic inversion really is.
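The stability condition the reviewer refers to can be illustrated numerically. The sketch below assumes linear feedback dynamics du/dt = e - (WB - alpha*I) u as one plausible reading of the condition, not the paper's exact equations; the sizes, the leak value, and the choice B = W^T (made so that W @ B is positive definite and the condition holds by construction) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_out, n_hid = 5, 20
W = rng.standard_normal((n_out, n_hid)) / np.sqrt(n_hid)  # forward weights
B = W.T.copy()  # feedback weights; transpose chosen so W @ B is positive definite
alpha = 0.05    # leak term (hypothetical value)

M = W @ B - alpha * np.eye(n_out)

# Stability check: the linear dynamics du/dt = e - M u converge iff
# every eigenvalue of M has positive real part.
assert np.all(np.linalg.eigvals(M).real > 0)

# Simulate the dynamics; they settle to the fixed point u* = M^{-1} e.
e = rng.standard_normal(n_out)
u = np.zeros(n_out)
dt = 0.1
for _ in range(5000):
    u += dt * (e - M @ u)

assert np.allclose(u, np.linalg.solve(M, e), atol=1e-6)
```

With a generic random B in place of W^T, eigenvalues of W @ B can have negative real parts, in which case the same loop diverges; this is exactly the reviewer's point that the condition is unlikely to be satisfied generically and must be enforced by hand.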